128 research outputs found

    A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes

    IT infrastructures can be quantitatively described by attributes, like performance or energy efficiency. Ever-changing user demands and economic endeavors require varying short-term and long-term decisions to align an IT infrastructure, and particularly its attributes, with this dynamic surrounding. Potentially conflicting attribute goals and the central role of IT infrastructures presuppose decision making based upon reasoning, the process of forming inferences from facts or premises. The focus on specific IT infrastructure parts or on a fixed (small) attribute set disqualifies existing reasoning approaches for this intent, as they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor address inter- and intra-attribute correlations sufficiently. This thesis presents a process model for the integrated reasoning about quantitative IT infrastructure attributes. The process model’s main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation is based upon model integration, to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, giving it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for What-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning.
The process model provides five template classes that collectively formalize all phases, in order to foster reproducibility and reduce error-proneness. Validation of the process model is threefold. A controlled experiment reasons about a Raspberry Pi cluster’s performance and energy efficiency to illustrate feasibility. In addition, a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, discloses the process model’s level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
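
The central idea of a reasoning function, a mapping from influencing factors and modification parameters to an attribute vector, can be sketched in a few lines. The attribute models below (throughput and power formulas, parameter names) are hypothetical placeholders for illustration, not the models compiled by the thesis:

```python
# Minimal sketch of a "reasoning function": a mapping from parametric
# influencing factors to an attribute vector (performance, power,
# energy efficiency). All model formulas here are hypothetical.

def reasoning_function(nodes: int, clock_ghz: float, load: float) -> dict:
    """Map infrastructure parameters to an attribute vector."""
    # Hypothetical performance model: throughput scales with node count,
    # clock speed, and utilization.
    performance = nodes * clock_ghz * 10.0 * load
    # Hypothetical power model: per-node idle floor plus load-dependent share.
    power_w = nodes * (3.0 + 4.0 * load)
    return {
        "performance": performance,
        "power_w": power_w,
        # Energy efficiency expressed as performance per watt.
        "energy_efficiency": performance / power_w,
    }

# What-if analysis: compare a baseline against a scaled-out configuration.
baseline = reasoning_function(nodes=4, clock_ghz=1.2, load=0.8)
scaled = reasoning_function(nodes=8, clock_ghz=1.2, load=0.8)
```

Evaluating the same function on alternative parameter tuples is the "What-if analysis" use named above; optimization would instead search the parameter space for an attribute-vector optimum.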

    Changing EDSS progression in placebo cohorts in relapsing MS: A systematic review and meta-regression

    Background: Recent systematic reviews of randomised controlled trials (RCTs) in relapsing multiple sclerosis (RMS) revealed a decrease in placebo annualized relapse rates (ARR) over the past two decades. Furthermore, regression-to-the-mean effects were observed in ARR and MRI lesion counts. It is unclear whether disease progression measured by the Expanded Disability Status Scale (EDSS) exhibits similar features. Methods: A systematic review of RCTs in RMS was conducted, extracting data on EDSS and baseline characteristics. The logarithmic odds of disease progression were modelled to investigate time trends. Random-effects models were used to account for between-study variability; all investigated models included trial duration as a predictor to correct for unequal study durations. Meta-regressions were conducted to assess the prognostic value of a number of baseline variables. Results: The systematic literature search identified 39 studies, including a total of 19,714 patients. The proportion of patients in placebo controls experiencing a disease progression decreased over the years (p<0.001). Meta-regression identified associated covariates, including study size and duration, that in part explained the time trend. Progression probabilities tended to be lower in the second year than in the first, with a reduction of 24% in progression probability from year 1 to year 2 (p=0.014). Conclusion: EDSS disease progression exhibits behaviour over time similar to that of the ARR, pointing to changes in trial characteristics over the years and questioning comparisons between historical and recent trials.
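
The modelled quantity, the logarithmic odds of progression, and a time trend across trials can be illustrated with a toy calculation. The trial data below are invented for illustration, and the fit is a simple fixed-effect least-squares trend; the review itself uses random-effects models and adjusts for trial duration:

```python
import math

# Hypothetical placebo-arm data: (publication year, patients, progressors).
# Numbers are illustrative only, not extracted from the review.
trials = [
    (1995, 200, 70),
    (2000, 250, 75),
    (2005, 300, 72),
    (2010, 400, 76),
    (2015, 500, 80),
]

# Log odds of disease progression per trial (the quantity modelled above).
years = [y for y, n, k in trials]
logits = [math.log(k / (n - k)) for y, n, k in trials]

# Simple least-squares slope of logit on publication year. A negative
# slope corresponds to the reported decline in progression proportions.
my = sum(years) / len(years)
ml = sum(logits) / len(logits)
slope = (sum((y - my) * (l - ml) for y, l in zip(years, logits))
         / sum((y - my) ** 2 for y in years))
```

A full reanalysis would add a random intercept per trial and trial duration as a covariate, which this sketch omits for brevity.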

    Architectural Constraints for Pervasive Adaptive Applications

    Component-based architectures are used more and more to face a central challenge of today's mobile applications: software entities and devices enter and leave the application scope very frequently. With the flexibility of this concept and the ability to handle a huge number of situations come unpredictability and reduced reliability of the application. This article presents a "safety net" woven from architectural constraints and an internal DSL to ensure the integrity of the whole application even after multiple reconfigurations. With this integrated, non-graph-oriented approach, software systems can be much more flexible with less code complexity, and the responsibility for architectural integrity is moved from the developer to the application.
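
The "safety net" idea, constraints that are re-checked after every reconfiguration, can be sketched as follows. The component names, constraint, and API are hypothetical illustrations, not the article's DSL:

```python
# Sketch of architectural constraints verified after each reconfiguration.
# Components, connections, and the example constraint are hypothetical.

class Architecture:
    def __init__(self):
        self.components = set()
        self.connections = set()   # (client, provider) pairs
        self.constraints = []      # (description, predicate) pairs

    def constrain(self, description, predicate):
        self.constraints.append((description, predicate))

    def reconfigure(self, add=(), remove=(), connect=()):
        """Apply a reconfiguration, then verify all constraints."""
        self.components |= set(add)
        self.components -= set(remove)
        # Drop connections whose endpoints left the application scope.
        self.connections = {(c, p) for c, p in self.connections
                            if c in self.components and p in self.components}
        self.connections |= set(connect)
        # The safety net: report every violated constraint.
        return [d for d, pred in self.constraints if not pred(self)]

arch = Architecture()
arch.constrain("UI must never talk to storage directly",
               lambda a: ("ui", "storage") not in a.connections)
ok = arch.reconfigure(add=["ui", "service", "storage"],
                      connect=[("ui", "service"), ("service", "storage")])
```

An empty violation list means the reconfiguration preserved architectural integrity; a non-empty list lets the application itself reject or roll back the change.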

    Identification of Deposited Oil Structures on Thin Porous Oil Mist Filter Media Applying µ-CT Imaging Technique

    The identification of microscale oil structures formed from deposited oil droplets on the front face of a coalescence filter medium is essential to understand the initial state of the coalescence filtration process. Using μ-CT imaging and a deep learning tool for segmentation, this work presents a novel approach to visualize and identify deposited oil structures, such as oil droplets on fibers or oil sails between adjacent fibers of different sizes, shapes, and orientations. Furthermore, the local and global porosity, saturation, and fiber ratios of the different fiber materials of the oleophilic filter medium were compared and evaluated. In particular, the local and global porosity of the filter material showed good agreement. Local and global saturation, as well as the fiber ratios on local and global scale, showed noticeable differences, which can mainly be attributed to the small field of view of the μ-CT scan (350 μm × 250 μm) or the minimal resolution of approximately 1 μm. Finally, fiber diameters of the investigated filter material were analyzed, showing good agreement with the manufacturer's specifications. The analytical approach to visualize and analyze the deposited oil structures was the main emphasis of this work.
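
The local-versus-global comparison amounts to computing void fractions over sub-regions of a segmented binary volume versus the whole volume. A minimal sketch on a tiny synthetic volume (the 4×4×4 grid and solid fraction are invented, not from the scan):

```python
# Sketch: global vs local porosity of a segmented (binary) filter volume.
# 1 = solid (fiber or oil), 0 = void. The tiny synthetic volume stands in
# for a segmented micro-CT stack.

import random

random.seed(0)
N = 4
volume = [[[1 if random.random() < 0.2 else 0 for _ in range(N)]
           for _ in range(N)] for _ in range(N)]

def porosity(vox):
    """Void-volume fraction of a 3D binary array (nested lists)."""
    flat = [v for plane in vox for row in plane for v in row]
    return flat.count(0) / len(flat)

# Global porosity over the full volume.
global_porosity = porosity(volume)

# Local porosity per z-slab: a 1D porosity profile through the medium,
# analogous to evaluating small sub-volumes of the scan.
local_porosity = [porosity([plane]) for plane in volume]
```

With equal-sized slabs the mean of the local porosities reproduces the global value exactly, which is why local and global porosity "agree" whenever the field of view is representative.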


    Diagnostic indices for vertiginous diseases

    Background: Vertigo and dizziness are symptoms which are reported frequently in clinical practice. We aimed to develop diagnostic indices for four prevalent vertiginous diseases: benign paroxysmal positional vertigo (BPPV), Menière's disease (MD), vestibular migraine (VM), and phobic postural vertigo (PPV). Methods: Based on a detailed questionnaire handed out to consecutive patients presenting for the first time in our dizziness clinic, we preselected a set of seven questions with desirable diagnostic properties when compared with the final diagnosis after medical workup. Using exact logistic regression analysis, diagnostic scores, each comprising four to six items that can simply be added up, were built for each of the four diagnoses. Results: Of 193 patients, 131 questionnaires remained after excluding those with missing consent or data. Applying the suggested cut-off points, sensitivity and specificity were 87.5 and 93.5% for BPPV, 100 and 87.4% for MD, 92.3 and 83.7% for VM, and 73.7 and 84.1% for PPV, respectively. By changing the cut-off points, sensitivity and specificity can be adjusted to meet diagnostic needs. Conclusions: The diagnostic indices showed promising diagnostic properties. Once further validated, they could provide an easy-to-use and yet flexible tool for screening vertigo in clinical practice and epidemiological research.
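
An additive diagnostic index of this kind, questionnaire items summed and thresholded at a cut-off, is easy to sketch. The item names, weights, and the cut-off below are hypothetical, not the published indices:

```python
# Sketch of an additive diagnostic index: each positively answered item
# contributes one point, and a cut-off turns the sum into a call.
# Items and cut-off are hypothetical illustrations.

BPPV_ITEMS = ["spinning_vertigo", "attacks_under_1min",
              "triggered_by_position_change", "no_hearing_loss"]

def score(answers: dict, items=BPPV_ITEMS) -> int:
    """Simple additive score: count of positively answered items."""
    return sum(1 for item in items if answers.get(item, False))

def classify(answers: dict, cutoff: int = 3) -> bool:
    return score(answers) >= cutoff

def sensitivity_specificity(cases, controls, cutoff=3):
    """Operating characteristics of the score at a given cut-off."""
    tp = sum(classify(a, cutoff) for a in cases)
    tn = sum(not classify(a, cutoff) for a in controls)
    return tp / len(cases), tn / len(controls)

# Toy validation sample: two confirmed cases, two controls.
cases = [{i: True for i in BPPV_ITEMS},
         {"spinning_vertigo": True, "attacks_under_1min": True,
          "triggered_by_position_change": True}]
controls = [{"spinning_vertigo": True}, {}]
sens, spec = sensitivity_specificity(cases, controls)
```

Raising the cut-off trades sensitivity for specificity and vice versa, which is the adjustability the abstract describes.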

    Quantum computing as an enabling technology for the next business cycle

    We need more computing capacity for the next growth cycle, and computers with conventional transistor technology are reaching their limits, so new ideas are required. The quantum computer, which overcomes the binary system and is not based on silicon microchips, could be a solution. This technology will continue to develop exponentially and transform science, the economy, and society. Furthermore, the paradigm of quantum communication offers an entirely novel possibility of distributed computing by allowing quantum computers to be networked via quantum channels for intrinsically secure communication. This article explains how quantum computers exploit new phenomena that do not occur in classical physics. Along the four primary application areas identified (optimization, simulation, machine learning, and cryptography), we describe possible applications in various industries. Our critical appraisal presents the technical challenges that remain before quantum computing can complement traditional computing systems. Accordingly, small and mid-sized companies do not necessarily need to invest in quantum computers, but in their use. Quantum as a service can be a first step for visionary leaders to get familiar with the technology and gain a competitive advantage early on.